LiDAR mapping is important yet challenging in self-driving and mobile robotics. To tackle such a global point cloud registration problem, DeepMapping converts the complex map estimation into a self-supervised training of simple deep networks. Despite its broad convergence range on small datasets, DeepMapping still cannot produce satisfactory results on large-scale datasets with thousands of frames. This is due to the lack of loop closures and exact cross-frame point correspondences, and the slow convergence of its global localization network. We propose DeepMapping2, which adds two novel techniques to address these issues: (1) organization of training batches based on map topology from loop closing, and (2) a self-supervised local-to-global point consistency loss leveraging pairwise registration. Our experiments and ablation studies on public datasets (KITTI, NCLT, and Nebula) demonstrate the effectiveness of our method. Our code will be released.
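As an illustration of the proposed local-to-global point consistency, below is a minimal PyTorch sketch, assuming that a pairwise registration has already produced a relative transform between an anchor frame and one of its loop-closure neighbors; the function names and interfaces are hypothetical, not the paper's implementation.

```python
import torch

def transform(points, pose):
    """Apply a 4x4 rigid transform to an (N, 3) point cloud."""
    R, t = pose[:3, :3], pose[:3, 3]
    return points @ R.T + t

def consistency_loss(points_j, G_i, G_j, T_ij):
    """Hypothetical local-to-global point consistency loss.

    points_j : (N, 3) points of frame j in its local frame.
    G_i, G_j : (4, 4) estimated global poses of frames i and j.
    T_ij     : (4, 4) relative pose from a pairwise registration that
               maps frame-j points into frame i's coordinates.
    The same points reach the global map along two routes; the loss
    penalizes their disagreement.
    """
    global_via_j = transform(points_j, G_j)                    # j -> world
    global_via_i = transform(transform(points_j, T_ij), G_i)   # j -> i -> world
    return (global_via_j - global_via_i).norm(dim=-1).mean()
```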
Automated audio captioning is a cross-modal translation task for describing the content of audio clips with natural language sentences. This task has attracted increasing attention and substantial progress has been made in recent years. Captions generated by existing models are generally faithful to the content of audio clips; however, these machine-generated captions are often deterministic (e.g., generating a fixed caption for a given audio clip), simple (e.g., using common words and simple grammar), and generic (e.g., generating the same caption for similar audio clips). When people are asked to describe the content of an audio clip, different people tend to focus on different sound events and describe an audio clip diversely from various aspects using distinct words and grammar. We believe that an audio captioning system should have the ability to generate diverse captions, either for a fixed audio clip or across similar audio clips. To this end, we propose an adversarial training framework based on a conditional generative adversarial network (C-GAN) to improve the diversity of audio captioning systems. A caption generator and two hybrid discriminators compete and are learned jointly, where the caption generator can be any standard encoder-decoder captioning model used to generate captions, and the hybrid discriminators assess the generated captions against different criteria, such as their naturalness and semantics. We conduct experiments on the Clotho dataset. The results show that our proposed model can generate captions with better diversity compared to state-of-the-art methods.
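To make the adversarial setup concrete, here is a heavily simplified PyTorch sketch of one generator update; the Gumbel-softmax relaxation, the discriminator interfaces (`d_nat`, `d_sem`), and the non-saturating loss are all assumptions for illustration, not details given by the abstract.

```python
import torch
import torch.nn.functional as F

# Hypothetical modules: any encoder-decoder captioner, plus two discriminators
# returning probabilities in (0, 1) for naturalness and audio-text semantics.
def generator_step(captioner, d_nat, d_sem, audio, gumbel_tau=1.0):
    """One adversarial update of the caption generator (sketch).

    The captioner emits per-token logits; Gumbel-softmax keeps the sampling
    step differentiable so discriminator scores can back-propagate into the
    generator.  All module interfaces here are assumed.
    """
    logits = captioner(audio)                                   # (B, T, vocab)
    soft_tokens = F.gumbel_softmax(logits, tau=gumbel_tau, hard=False)
    score_nat = d_nat(soft_tokens)           # is the caption fluent?
    score_sem = d_sem(audio, soft_tokens)    # does it match the audio?
    # Non-saturating GAN loss: the generator tries to raise both scores.
    loss = -(torch.log(score_nat + 1e-8) + torch.log(score_sem + 1e-8)).mean()
    return loss
```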
Visual place recognition (VPR) using deep networks has achieved state-of-the-art performance. However, most approaches require training with ground-truth sensor poses to obtain positive and negative samples from each observation's spatial neighborhood for supervised learning. When such information is unavailable, temporal neighborhoods from sequentially collected data streams can be exploited for self-supervised training, although we find that this yields suboptimal performance. Inspired by noisy-label learning, we propose a novel self-supervised framework named \textit{TF-VPR} that uses temporal neighborhoods and learnable feature neighborhoods to discover the unknown spatial neighborhoods. Our method follows an iterative training paradigm that alternates between (1) representation learning with data augmentation, (2) positive set expansion to include the current feature-space neighbors, and (3) positive set contraction via geometric verification. We conduct comprehensive experiments on both simulated and real datasets, with either RGB images or point clouds as input. The results show that our method outperforms the baselines in recall rate, robustness, and heading diversity, a novel metric we propose for VPR. Our code and datasets can be found at https://ai4ce.github.io/tf-vpr/.
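The sketch below illustrates the positive-set expansion and contraction steps in PyTorch, under the assumption that frame descriptors have already been trained for the current round; the `geom_verify` callable stands in for whatever geometric verification is used and is purely hypothetical.

```python
import torch

def update_positive_sets(features, temporal_pos, geom_verify, k=5):
    """One round of positive-set expansion and contraction (sketch).

    features     : (N, D) L2-normalized frame descriptors.
    temporal_pos : list of 1-D LongTensors; temporal neighbors per frame.
    geom_verify  : callable (i, j) -> bool; an assumed geometric check
                   (e.g. registration fitness), not specified by the abstract.
    Returns a refined positive set (LongTensor of frame indices) per frame.
    """
    with torch.no_grad():                                # mining only, no gradients
        sims = features @ features.T                     # cosine similarity
        sims.fill_diagonal_(-1.0)
        knn = sims.topk(k, dim=1).indices                # feature-space neighbors
    positives = []
    for i in range(features.shape[0]):
        expanded = torch.cat([temporal_pos[i], knn[i]]).unique()      # expansion
        kept = [j for j in expanded.tolist() if geom_verify(i, j)]    # contraction
        positives.append(torch.tensor(kept, dtype=torch.long))
    return positives
```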
Few-shot bioacoustic event detection is the task of detecting the occurrence times of novel sounds given only a few examples. Previous methods employ metric learning to build a latent space from labeled segments of different sound classes, also known as positive events. In this study, we propose a segment-level few-shot learning framework that utilizes both positive and negative events during model optimization. Training with negative events, which are available in larger quantities than positive events, can improve the generalization ability of the model. In addition, we use transductive inference on the validation set during training to better adapt to novel classes. We conduct ablation studies of our proposed method with different settings of input features, training data, and hyper-parameters. Our final system achieves an F-measure of 62.73 on the DCASE 2022 challenge task 5 (DCASE2022-T5) validation set, outperforming the baseline prototypical network's F-measure of 34.02. With the proposed method, our submitted system ranked 2nd in DCASE2022-T5. The code for this paper is fully open-sourced at https://github.com/haoheliu/dcase_2022_task_5.
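A minimal PyTorch sketch of segment-level scoring with both positive and negative prototypes is given below; the embedding network, segment extraction, and any post-processing are assumed and not specified by the abstract.

```python
import torch

def segment_scores(query_segs, pos_support, neg_support):
    """Score query segments against positive and negative prototypes (sketch).

    query_segs  : (Q, D) embeddings of query audio segments.
    pos_support : (Np, D) embeddings of labeled positive-event segments.
    neg_support : (Nn, D) embeddings of segments known to contain no event.
    Returns the probability that each query segment contains the target event,
    following a standard prototypical-network formulation.
    """
    pos_proto = pos_support.mean(dim=0)    # positive prototype
    neg_proto = neg_support.mean(dim=0)    # negative prototype
    protos = torch.stack([neg_proto, pos_proto])     # (2, D)
    dists = torch.cdist(query_segs, protos)          # (Q, 2) Euclidean distances
    probs = torch.softmax(-dists, dim=1)             # closer -> higher probability
    return probs[:, 1]                               # P(event) per segment
```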
Automated audio captioning is a cross-modal translation task that aims to generate natural language descriptions for given audio clips. This task has received increasing attention in recent years with the release of freely available datasets, and the problem has been addressed mainly with deep learning techniques. Numerous approaches have been proposed, such as investigating different neural network architectures, exploiting auxiliary information such as keywords or sentence information to guide caption generation, and adopting different training strategies, which have greatly advanced this field. In this paper, we present a comprehensive review of the published contributions to automated audio captioning, from a variety of existing approaches to evaluation metrics and datasets. We also discuss open challenges and envisage possible future research directions.
Automated audio captioning (AAC) is a cross-modal translation task that aims to describe the content of an audio clip using natural language. As shown by the submissions received for Task 6 of the DCASE 2021 challenge, this problem has received increasing interest in the community. Existing AAC systems are usually based on an encoder-decoder architecture, in which the audio signal is encoded into a latent representation and aligned with its corresponding text description, and a decoder is then used to generate the caption. However, training an AAC system often encounters the problem of data scarcity, which may lead to inaccurate representations and poor audio-text alignment. To address this problem, we propose a novel encoder-decoder framework with a contrastive loss, named CL4AC. In CL4AC, self-supervised signals derived from the original audio-text paired data are used to exploit the correspondences between audio and text by contrasting samples, which improves the quality of the latent representation and the alignment between audio and text while training with limited data. Experiments are carried out on the Clotho dataset to show the effectiveness of our proposed approach.
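The following PyTorch sketch shows one common way to realize an audio-text contrastive objective with in-batch negatives; the abstract does not state the exact form used by CL4AC, so this InfoNCE-style loss is only an assumption for illustration.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(audio_emb, text_emb, temperature=0.07):
    """Generic in-batch audio-text contrastive loss (sketch).

    audio_emb, text_emb : (B, D) pooled embeddings of B matched audio-caption
    pairs.  Matched pairs sit on the diagonal of the similarity matrix; all
    other entries act as mismatched (negative) pairs.
    """
    a = F.normalize(audio_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    logits = a @ t.T / temperature                   # (B, B) similarities
    targets = torch.arange(a.shape[0], device=a.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.T, targets))
```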
Neural-symbolic computing aims at integrating robust neural learning and sound symbolic reasoning into a single framework, so as to leverage the complementary strengths of these two seemingly unrelated (maybe even contradictory) AI paradigms. The central challenge in neural-symbolic computing is to unify the formulation of neural learning and symbolic reasoning into a single framework with common semantics, that is, to seek a joint representation between a neural model and a logical theory that can support the basic grounding learned by the neural model while remaining faithful to the semantics of the logical theory. In this paper, we propose differentiable fuzzy $\mathcal{ALC}$ (DF-$\mathcal{ALC}$) for this role, as a neural-symbolic representation language with the desired semantics. DF-$\mathcal{ALC}$ unifies the description logic $\mathcal{ALC}$ and neural models for symbol grounding; in particular, it infuses an $\mathcal{ALC}$ knowledge base into neural models through differentiable concept and role embeddings. We define a hierarchical loss to enforce the constraint that the grounding learned by neural models must be semantically consistent with $\mathcal{ALC}$ knowledge bases. We also find that capturing the semantics in grounding solely by maximizing satisfiability cannot revise the grounding rationally. We further define a rule-based loss for DF-$\mathcal{ALC}$ to adapt to symbol grounding problems. The experimental results show that DF-$\mathcal{ALC}$ with the rule-based loss can improve the performance of image object detectors in an unsupervised manner, even in low-resource situations.
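To make the idea of a differentiable fuzzy grounding concrete, here is a toy PyTorch sketch of fuzzy $\mathcal{ALC}$-style operators and a subsumption penalty; the specific t-norms and the loss form are illustrative assumptions, not the operators defined in the paper.

```python
import torch

# A toy differentiable fuzzy-ALC grounding (sketch): concept memberships are
# values in [0, 1] per individual, and roles are fuzzy adjacency matrices.
# The product / Goedel operators below are common choices; the paper's exact
# semantics are not specified in the abstract.

def f_not(c):                 # fuzzy negation:  (not C)(x) = 1 - C(x)
    return 1.0 - c

def f_and(c, d):              # product t-norm:  (C and D)(x) = C(x) * D(x)
    return c * d

def f_exists(role, c):        # (exists R.C)(x) = max_y min(R(x, y), C(y))
    return torch.minimum(role, c.unsqueeze(0)).amax(dim=1)

def subsumption_loss(c, d):
    """Penalty for violating the axiom C ⊑ D under the grounding:
    each individual's membership in C should not exceed that in D."""
    return torch.relu(c - d).mean()

# Usage: memberships for 4 individuals; the axiom pushes Animal(x) >= Cat(x).
cat    = torch.tensor([0.9, 0.1, 0.8, 0.2], requires_grad=True)
animal = torch.tensor([0.95, 0.3, 0.5, 0.4], requires_grad=True)
loss = subsumption_loss(cat, animal)
loss.backward()
```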
Modeling the human body in a canonical space is a common practice for capture and animation. However, when it comes to neural radiance fields (NeRF), learning a static NeRF in canonical space is not enough, because the lighting on the body changes as the person moves, even when the scene lighting is constant. Previous methods alleviate the lighting inconsistency by learning a per-frame embedding, but this operation does not generalize to unseen poses. Given that the lighting condition is static in world space while the human body is consistent in canonical space, we propose a dual-space NeRF that models the scene lighting and the human body with two MLPs in two separate spaces. To bridge these two spaces, previous methods mostly rely on the linear blend skinning (LBS) algorithm. However, the blend weights of LBS for a dynamic neural field are intractable and thus are usually memorized with another MLP, which does not generalize to novel poses. Although the blend weights of a parametric mesh such as SMPL can be borrowed, the interpolation operation introduces more artifacts. In this paper, we propose to use barycentric mapping, which directly generalizes to unseen poses and surprisingly achieves superior results over LBS with neural blend weights. Quantitative and qualitative results on the Human3.6M and ZJU-MoCap datasets show the effectiveness of our method.
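A minimal PyTorch sketch of the barycentric mapping idea is shown below: a point expressed in barycentric coordinates of its nearest posed triangle is re-expressed with the same weights on the corresponding canonical triangle. The nearest-triangle search and any offset along the normal are omitted, and the exact formulation is an assumption based only on the abstract.

```python
import torch

def barycentric_coords(p, tri):
    """Barycentric coordinates of point p w.r.t. triangle tri ((3, 3) vertices),
    after projecting p onto the triangle's plane."""
    a, b, c = tri
    v0, v1, v2 = b - a, c - a, p - a
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    v = (d11 * d20 - d01 * d21) / denom
    w = (d00 * d21 - d01 * d20) / denom
    return torch.stack([1.0 - v - w, v, w])

def to_canonical(p, posed_tri, canonical_tri):
    """Map an observation-space point into canonical space by re-applying its
    barycentric weights (w.r.t. the nearest posed triangle) to the same
    triangle of the canonical mesh."""
    bary = barycentric_coords(p, posed_tri)
    return bary @ canonical_tri            # (3,) point in canonical space

# Usage with a dummy triangle pair.
posed = torch.tensor([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]])
canon = torch.tensor([[0., 0., 0.], [0., 0., 1.], [0., 1., 0.]])
print(to_canonical(torch.tensor([0.2, 0.3, 0.0]), posed, canon))
```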
We present a novel embedding field, \emph{PREF}, as a compact representation to facilitate neural signal modeling and reconstruction tasks. Pure multi-layer perceptron (MLP) based neural techniques are biased toward low-frequency signals and rely on deep layers or Fourier encoding to avoid losing details. Instead, PREF adopts a compact and physically explainable encoding field based on the phasor formulation of the Fourier embedding space. We conduct comprehensive experiments to demonstrate the advantages of PREF over the latest spatial embedding techniques. We then develop a highly efficient frequency learning framework using an approximated inverse Fourier transform scheme together with a novel Parseval regularizer. Extensive experiments show that our efficient and compact frequency-based neural signal processing technique is on par with, or even better than, the state-of-the-art on 2D image completion, 3D SDF surface regression, and 5D radiance field reconstruction.
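As an illustration of how a Parseval-style penalty can act directly on frequency-domain coefficients, here is a small PyTorch sketch; whether PREF's regularizer takes this particular form is an assumption based only on the abstract.

```python
import torch

def parseval_smoothness(coeffs, freqs):
    """Frequency-domain smoothness penalty (sketch).

    coeffs : (K,) complex Fourier coefficients of a 1D embedding field.
    freqs  : (K,) corresponding (integer) frequencies on a unit period.
    By Parseval's theorem, sum_k |2*pi*f_k * c_k|^2 equals the L2 energy of
    the signal's spatial derivative, so penalizing frequency-weighted
    coefficient energy discourages high-frequency noise without ever
    reconstructing the signal.
    """
    return ((2 * torch.pi * freqs) ** 2 * coeffs.abs() ** 2).sum()

# Usage on random complex coefficients.
K = 16
coeffs = torch.randn(K, dtype=torch.cfloat, requires_grad=True)
freqs = torch.arange(K, dtype=torch.float32)
loss = parseval_smoothness(coeffs, freqs)
loss.backward()
```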
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes image and point-cloud tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly by encoding the 3D points into multi-modal features. The core design of CMT is quite simple while its performance is impressive: CMT obtains 73.0% NDS on the nuScenes benchmark. Moreover, CMT remains strongly robust even if the LiDAR is missing. Code will be released at https://github.com/junjie18/CMT.
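Below is a minimal PyTorch sketch of the implicit alignment idea, where both image and point-cloud tokens receive position embeddings computed from shared 3D coordinates before joint attention; all sizes and module choices are assumptions, and the detection head is omitted.

```python
import torch
import torch.nn as nn

class TokenFusion(nn.Module):
    """Minimal sketch of implicit multi-modal alignment: image and LiDAR
    tokens each receive a position embedding computed from 3D coordinates,
    then are processed jointly by a transformer encoder layer.  The
    coordinate-to-embedding MLP and the downstream head are assumptions."""

    def __init__(self, dim=256):
        super().__init__()
        self.pos_mlp = nn.Sequential(nn.Linear(3, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.encoder = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)

    def forward(self, img_tokens, img_xyz, pts_tokens, pts_xyz):
        # img_xyz / pts_xyz: per-token 3D coordinates (e.g. back-projected
        # pixels, voxel centers) placing both modalities in one shared space.
        tokens = torch.cat([img_tokens + self.pos_mlp(img_xyz),
                            pts_tokens + self.pos_mlp(pts_xyz)], dim=1)
        return self.encoder(tokens)        # (B, N_img + N_pts, dim)

# Usage with random tokens.
m = TokenFusion()
out = m(torch.randn(2, 10, 256), torch.randn(2, 10, 3),
        torch.randn(2, 20, 256), torch.randn(2, 20, 3))
```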